AAAI.2020 - Knowledge Representation and Reasoning

Total: 46

#1 Learning and Reasoning for Robot Sequential Decision Making under Uncertainty

Authors: Saeid Amiri ; Mohammad Shokrolah Shirazi ; Shiqi Zhang

Robots frequently face complex tasks that require more than one action, making sequential decision-making (SDM) capabilities necessary. The key contribution of this work is a robot SDM framework, called LCORPP, that simultaneously supports supervised learning for passive state estimation, automated reasoning with declarative human knowledge, and planning under uncertainty toward achieving long-term goals. In particular, we use a hybrid reasoning paradigm to refine the state estimator and to provide informative priors for the probabilistic planner. In experiments, a mobile robot is tasked with estimating human intentions using their motion trajectories, declarative contextual knowledge, and human-robot interaction (dialog-based and motion-based). Results suggest that, in both efficiency and accuracy, our framework outperforms its no-learning and no-reasoning counterparts in an office environment.

#2 Query Rewriting for Ontology-Mediated Conditional Answers

Authors: Medina Andresel ; Magdalena Ortiz ; Mantas Simkus

Among many solutions for extracting useful answers from incomplete data, ontology-mediated queries (OMQs) use domain knowledge to infer missing facts. We propose an extension of OMQs that allows us to make certain assumptions—for example, about parts of the data that may be unavailable at query time, or costly to query—and retrieve conditional answers, that is, tuples that become certain query answers when the assumptions hold. We show that querying in this powerful formalism often has no higher worst-case complexity than in plain OMQs, and that these queries are first-order rewritable for DL-Liteℛ. Rewritability is preserved even if we allow some use of closed predicates to combine the (partial) closed- and open-world assumptions. This is remarkable, as closed predicates are a very useful extension of OMQs, but they usually make query answering intractable in data complexity, even in very restricted settings.

#3 Revisiting the Foundations of Abstract Argumentation – Semantics Based on Weak Admissibility and Weak Defense

Authors: Ringo Baumann ; Gerhard Brewka ; Markus Ulbricht

In his seminal 1995 paper, Dung paved the way for abstract argumentation, a by now major research area in knowledge representation. He pointed out that there is a problematic issue with self-defeating arguments underlying all traditional semantics. A self-defeat occurs if an argument attacks itself either directly or indirectly via an odd attack loop, unless the loop is broken up by some argument attacking the loop from outside. Motivated by the fact that such arguments represent self-contradictory or paradoxical arguments, he asked for reasonable semantics which overcome the problem that such arguments may indeed invalidate any argument they attack. This paper tackles this problem from scratch. More precisely, instead of continuing to use previous concepts defined by Dung we provide new foundations for abstract argumentation, so-called weak admissibility and weak defense. After showing that these key concepts are compatible as in the classical case we introduce new versions of the classical Dung-style semantics including complete, preferred and grounded semantics. We provide a rigorous study of these new concepts including interrelationships as well as the relations to their Dung-style counterparts. The newly introduced semantics overcome the issue with self-defeating arguments, and they are semantically insensitive to syntactic deletions of self-attacking arguments, a special case of self-defeat.
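
Since the abstract turns on how a self-defeating argument behaves under the classical notions, a brute-force check of Dung's standard definitions makes the issue concrete. The sketch below uses only the classical notion of admissibility (not the paper's weak-admissibility semantics) on a two-argument framework in which b attacks itself and attacks a: no non-empty set is admissible, so the self-defeater invalidates a.

```python
# Brute-force illustration of the classical notions the paper revisits
# (standard Dung definitions, not the new weak semantics): with a self-attacking
# argument b that also attacks a, only the empty set is admissible.
from itertools import chain, combinations

A = {"a", "b"}
R = {("b", "b"), ("b", "a")}     # b attacks itself and attacks a

def conflict_free(S):
    return not any((x, y) in R for x in S for y in S)

def defends(S, x):
    # every attacker of x must itself be attacked by some member of S
    return all(any((z, y) in R for z in S) for (y, t) in R if t == x)

def admissible(S):
    return conflict_free(S) and all(defends(S, x) for x in S)

subsets = chain.from_iterable(combinations(sorted(A), r) for r in range(len(A) + 1))
print([set(S) for S in subsets if admissible(set(S))])   # prints [set()]
```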

#4 Forgetting an Argument

Authors: Ringo Baumann ; Dov Gabbay ; Odinaldo Rodrigues

The notion of forgetting, as considered in the famous paper by Lin and Reiter in 1994, has been extensively studied in classical logic and, more recently, in non-monotonic formalisms like logic programming. In this paper, we carry the idea of forgetting over to another major AI formalism, namely Dung-style argumentation frameworks. Our approach is axiomatically driven and not limited to any specific semantics: we propose semantical and syntactical desiderata encoding different criteria for what forgetting an argument might mean; analyze how these criteria relate to each other; and check whether the criteria can be satisfied in general. The analysis is done for a number of widely used argumentation semantics. Our investigation shows that almost all desiderata are individually satisfiable. However, combinations of semantical and/or syntactical conditions reveal a much more interesting landscape. For instance, we found that the ad hoc approach to forgetting an argument, i.e., the syntactical removal of the argument and all of its associated attacks, is too restrictive and only compatible with the two weakest semantical desiderata. Amongst the several interesting combinations identified, we show that one satisfies a notion of minimal change and present an algorithm that, given an AF F and an argument x, constructs a suitable AF G satisfying the conditions in the combination.
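
The "ad hoc" approach mentioned in the abstract, i.e. syntactic removal of the argument together with its attacks, is easy to state operationally; the few lines below are a minimal sketch of exactly that operation, not of the paper's preferred, desiderata-driven operators.

```python
# Minimal sketch of "ad hoc" forgetting: delete the argument and every attack
# it participates in.  (The paper shows this is only compatible with the two
# weakest semantical desiderata.)
def forget(arguments, attacks, x):
    return (arguments - {x},
            {(a, b) for (a, b) in attacks if x not in (a, b)})

A = {"a", "b", "c"}
R = {("a", "b"), ("b", "c"), ("c", "a")}
print(forget(A, R, "b"))   # arguments {'a', 'c'} and the single attack ('c', 'a') remain
```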

#5 Checking Chase Termination over Ontologies of Existential Rules with Equality

Authors: David Carral ; Jacopo Urbani

The chase is a sound and complete algorithm for conjunctive query answering over ontologies of existential rules with equality. To enable its effective use, we can apply acyclicity notions; that is, sufficient conditions that guarantee chase termination. Unfortunately, most of these notions have only been defined for existential rule sets without equality. A proposed solution to circumvent this issue is to treat equality as an ordinary predicate with an explicit axiomatisation. We empirically show that this solution is not efficient in practice and propose an alternative approach. More precisely, we show that, if the chase terminates for any equality axiomatisation of an ontology, then it terminates for the original ontology (which may contain equality). Therefore, one can apply existing acyclicity notions to check chase termination over an axiomatisation of an ontology and then use the original ontology for reasoning. We show that, in practice, doing so results in a more efficient reasoning procedure. Furthermore, we present equality model-faithful acyclicity, a general acyclicity notion that can be directly applied to ontologies with equality.

#6 Model-Based Diagnosis with Uncertain Observations

Authors: Dean Cazes ; Meir Kalech

Classical model-based diagnosis uses a model of the system to infer diagnoses – explanations – of a given abnormal observation. In this work, we explore how to address the case where there is uncertainty over a given observation. This can happen, for example, when the observations are collected by noisy sensors that are known to return incorrect observations with some probability. We formally define this common scenario for consistency-based and abductive models. In addition, we analyze the complexity of two complete algorithms we propose for finding all diagnoses and correctly ranking them. Finally, we propose a third algorithm that returns the most probable diagnosis without finding all possible diagnoses. Experimental evaluation shows that this third algorithm can be very effective in cases where the number of faults is small and the uncertainty over the observations is not large. If, however, all possible diagnoses are desired, then the choice between the first two algorithms depends on whether the domain calls for abductive or consistency-based diagnosis.
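
To make the setting tangible, here is a small, hedged sketch (a toy two-inverter circuit and a hypothetical sensor-noise model, not one of the paper's algorithms) of ranking diagnoses under an uncertain observation: each candidate observation carries a probability, minimal consistency-based diagnoses are computed per candidate, and each diagnosis is scored by the probability mass of the observations it is consistent with.

```python
# Hedged toy example (two inverters in series), not one of the paper's
# algorithms: rank diagnoses by the probability mass of the uncertain
# observations they are consistent with.
from itertools import combinations

COMPONENTS = ("inv1", "inv2")   # circuit: in -> inv1 -> inv2 -> out

def consistent(abnormal, inp, out):
    """Weak fault model: a healthy inverter negates its input, an abnormal
    one places no constraint on its output."""
    mids = {not inp} if "inv1" not in abnormal else {False, True}
    outs = set()
    for mid in mids:
        outs |= {not mid} if "inv2" not in abnormal else {False, True}
    return out in outs

def minimal_diagnoses(inp, out):
    found = []
    for size in range(len(COMPONENTS) + 1):
        for cand in combinations(COMPONENTS, size):
            if any(d <= set(cand) for d in found):
                continue                       # a smaller diagnosis already explains it
            if consistent(set(cand), inp, out):
                found.append(frozenset(cand))
    return found

# Uncertain observation: input is True, the output sensor reports True but is
# wrong with probability 0.1 (hypothetical noise model).
candidate_observations = [((True, True), 0.9), ((True, False), 0.1)]

scores = {}
for (inp, out), prob in candidate_observations:
    for diag in minimal_diagnoses(inp, out):
        scores[diag] = scores.get(diag, 0.0) + prob

for diag, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(sorted(diag) or ["all healthy"], round(score, 3))
```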

#7 ParamE: Regarding Neural Network Parameters as Relation Embeddings for Knowledge Graph Completion

Authors: Feihu Che ; Dawei Zhang ; Jianhua Tao ; Mingyue Niu ; Bocheng Zhao

We study the task of learning entity and relation embeddings in knowledge graphs for predicting missing links. Previous translational models for link prediction make use of translational properties but lack sufficient expressiveness, while the convolutional neural network based model (ConvE) takes advantage of the strong nonlinear fitting ability of neural networks but overlooks translational properties. In this paper, we propose a new knowledge graph embedding model called ParamE which can exploit both advantages. In ParamE, head entity embeddings, relation embeddings and tail entity embeddings are regarded as the input, parameters and output of a neural network, respectively. Since parameters in networks are effective in converting input to output, taking neural network parameters as relation embeddings makes ParamE much more expressive and translational. In addition, the entity and relation embeddings in ParamE live in feature space and parameter space respectively, which is in line with the intuition that entities and relations should be mapped into two different spaces. We evaluate the performance of ParamE on the standard FB15k-237 and WN18RR datasets, and experiments show that ParamE significantly outperforms existing state-of-the-art models such as ConvE, SACN, RotatE and D4-STE/Gumbel.
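
The core idea, relation embeddings acting as the parameters of a network mapping head to tail, can be sketched compactly. The PyTorch snippet below is an illustrative reading of that idea; using a single linear layer per relation is an assumption of the sketch, and it is not the authors' released implementation.

```python
# Illustrative PyTorch sketch (not the authors' code): the relation embedding
# is reshaped into the weights of a small network, the head embedding is its
# input, and the score measures how well the output matches the tail embedding.
import torch
import torch.nn as nn

class ParamELikeScorer(nn.Module):
    def __init__(self, n_entities, n_relations, dim=64):
        super().__init__()
        self.dim = dim
        self.entity = nn.Embedding(n_entities, dim)
        # One relation embedding holds a weight matrix plus a bias vector;
        # using a single linear layer is an assumption of this sketch.
        self.relation = nn.Embedding(n_relations, dim * dim + dim)

    def forward(self, head_idx, rel_idx, tail_idx):
        h = self.entity(head_idx)                        # (batch, dim)
        r = self.relation(rel_idx)                       # (batch, dim*dim + dim)
        W = r[:, : self.dim * self.dim].view(-1, self.dim, self.dim)
        b = r[:, self.dim * self.dim:]
        out = torch.relu(torch.bmm(h.unsqueeze(1), W).squeeze(1) + b)
        t = self.entity(tail_idx)
        return (out * t).sum(dim=-1)                     # higher = more plausible

scorer = ParamELikeScorer(n_entities=1000, n_relations=20)
print(scorer(torch.tensor([3]), torch.tensor([5]), torch.tensor([7])))
```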

#8 Answering Conjunctive Queries with Inequalities in DL-Liteℛ

Authors: Gianluca Cima ; Maurizio Lenzerini ; Antonella Poggi

In the context of the Description Logic DL-Liteℛ≠, i.e., DL-Liteℛ without the unique name assumption (UNA) and with inequality axioms, we address the problem of adding to unions of conjunctive queries (UCQs) one of the simplest forms of negation, namely inequality. It is well known that answering conjunctive queries with unrestricted inequalities over DL-Liteℛ ontologies is in general undecidable. Therefore, we explore two strategies for recovering decidability and, hopefully, tractability. First, we weaken the ontology language and consider the variant of DL-Liteℛ≠ corresponding to RDFS enriched with both inequality and disjointness axioms. Second, we weaken the query language by preventing inequalities from being applied to existentially quantified variables, obtaining the class of queries named UCQ≠,bs. We prove that in both cases query answering is decidable, and we provide tight complexity bounds for the problem, for both data and combined complexity. Notably, the results show that answering UCQ≠,bs over DL-Liteℛ≠ ontologies is still in AC0 in data complexity.

#9 Epistemic Integrity Constraints for Ontology-Based Data Management

Authors: Marco Console ; Maurizio Lenzerini

Ontology-based data management (OBDM) is a powerful knowledge-oriented paradigm for managing data spread over multiple heterogeneous sources. In OBDM, the data sources of an information system are handled through the reconciled view provided by an ontology, i.e., a conceptualization of the underlying domain of interest expressed in some formal language. In any information system whose basic knowledge resides in data sources, it is of paramount importance to specify the acceptable states of that information. Usually, this is done via integrity constraints, i.e., requirements that the data must satisfy, formally expressed in some specific language. However, while the semantics of integrity constraints are clear in the context of databases, the presence of inferred information, typical of OBDM systems, considerably complicates the matter. In this paper, we establish a novel framework for integrity constraints in OBDM scenarios, based on the notion of the knowledge state of the information system. For integrity constraints in this framework, we define a language based on epistemic logic, and we study the decidability and complexity of both checking satisfaction and performing different forms of static analysis on them.

#10 Hypothetical Answers to Continuous Queries over Data Streams

Authors: Luís Cruz-Filipe ; Isabel Nunes ; Graça Gaspar

Continuous queries over data streams often delay answers until some relevant input arrives through the data stream. These delays may render answers obsolete by the time they arrive, for users who in the meantime sometimes have to make decisions with no help whatsoever. It can therefore be useful to provide hypothetical answers – “given the current information, it is possible that X will become true at time t” – instead of no information at all. In this paper we present a semantics for queries and corresponding answers that covers such hypothetical answers, together with an online algorithm for updating the set of facts that are consistent with the currently available information.
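
A minimal sketch of the flavour of such an online procedure (toy data structures and a single hard-coded rule, not the paper's formal semantics or algorithm): each open hypothesis records a conclusion and the premises still missing, streamed facts discharge missing premises, and a hypothesis with nothing left missing becomes a definite answer.

```python
# Toy sketch, not the paper's algorithm.  One hard-coded temporal rule:
#     alarm(T+2) <- temp_high(T), smoke(T+1)
from dataclasses import dataclass, field

RULE_HEAD = ("alarm", 2)                        # head predicate, offset from T
RULE_BODY = [("temp_high", 0), ("smoke", 1)]    # premises with offsets from T

@dataclass
class Hypothesis:
    conclusion: tuple                # e.g. ("alarm", 5)
    missing: set = field(default_factory=set)

hypotheses, answers = [], []

def on_fact(pred, time):
    """Process one streamed fact, updating hypothetical and definite answers."""
    # An arriving fact may discharge missing premises of open hypotheses.
    for hyp in hypotheses[:]:
        hyp.missing.discard((pred, time))
        if not hyp.missing:
            answers.append(hyp.conclusion)
            hypotheses.remove(hyp)
    # A fact matching the rule's first premise opens a new hypothesis.
    first_pred, first_off = RULE_BODY[0]
    if pred == first_pred:
        t0 = time - first_off
        missing = {(p, t0 + off) for p, off in RULE_BODY[1:]}
        hypotheses.append(Hypothesis((RULE_HEAD[0], t0 + RULE_HEAD[1]), missing))

on_fact("temp_high", 3)
print(hypotheses)   # alarm at time 5 is possible, provided smoke(4) arrives
on_fact("smoke", 4)
print(answers)      # [('alarm', 5)] has become a definite answer
```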

#11 ElGolog: A High-Level Programming Language with Memory of the Execution History

Authors: Giuseppe De Giacomo ; Yves Lespérance ; Eugenia Ternovska

Most programming languages only support tests that refer exclusively to the current state. This applies even to high-level programming languages based on the situation calculus, such as Golog. The result is that additional variables/fluents/data structures must be introduced to track conditions that the program uses in tests to make decisions. In this paper, drawing inspiration from McCarthy's Elephant 2000, we propose an extended version of Golog, called ElGolog, that supports rich tests about the execution history, where tests are expressed in a first-order variant of two-way linear dynamic logic that uses ElGolog programs with converse. We show that in spite of these rich tests, ElGolog shares key features with Golog, including a semantics based on macroexpansion into situation calculus formulas, upon which regression can still be applied. We also show that, like Golog, our extended language can easily be implemented in Prolog.

#12 Efficient Model-Based Diagnosis of Sequential Circuits

Authors: Alexander Feldman ; Ingo Pill ; Franz Wotawa ; Ion Matei ; Johan de Kleer

In Model-Based Diagnosis (MBD), we concern ourselves with the health and safety of physical and software systems. Although the two domains often call for different knowledge representations and algorithms, some tools, such as satisfiability (SAT) solvers and temporal logics, are used in both. In this paper we introduce Finite Trace Next Logic (FTNL) models of sequential circuits and propose an enhanced algorithm for computing minimal-cardinality diagnoses. Existing state-of-the-art satisfiability algorithms for minimal diagnosis use Sorting Networks (SNs) to constrain the cardinality of the diagnostic candidates. In our approach we exploit Multi-Operand Adders (MOAs) instead. Based on extensive tests with ISCAS-89 circuits, we found that MOAs enable Conjunctive Normal Form (CNF) encodings that are significantly more compact: they use 19.7 to 67.6 times fewer variables and 18.4 to 62 times fewer clauses, and converting an FTNL model to CNF becomes 6.2 to 22.2 times faster. Using SNs, however, yields 3.4 to 5.5 times faster on-line satisfiability checking. This makes MOAs preferable for applications where RAM and off-line time are more limited than on-line CPU time.
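
The practical point that the choice of cardinality encoding strongly affects CNF size can be reproduced with off-the-shelf tools. The snippet below uses the python-sat package to compare a sorting-network encoding with a sequential-counter encoding of an "at most 3 faults" constraint; the paper's Multi-Operand Adder encoding is not available in this library, so the counter only stands in to illustrate the size gap, not the paper's exact numbers.

```python
# Hedged illustration with python-sat (pip install python-sat); the MOA
# encoding from the paper is not part of this library.
from pysat.card import CardEnc, EncType

health_literals = list(range(1, 101))      # one assumption literal per component
for name, enc in [("sorting network", EncType.sortnetwrk),
                  ("sequential counter", EncType.seqcounter)]:
    cnf = CardEnc.atmost(lits=health_literals, bound=3, encoding=enc)
    print(f"{name:18s}: {cnf.nv} variables, {len(cnf.clauses)} clauses")
```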

#13 Proportional Belief Merging

Authors: Adrian Haret ; Martin Lackner ; Andreas Pfandler ; Johannes P. Wallner

In this paper we introduce proportionality to belief merging. Belief merging is a framework for aggregating information presented in the form of propositional formulas, and it generalizes many aggregation models in social choice. In our analysis, two incompatible notions of proportionality emerge: one similar to standard notions of proportionality in social choice, the other more in tune with the logic-based merging setting. Since established merging operators meet neither of these proportionality requirements, we design new proportional belief merging operators. We analyze the proposed operators against established rationality postulates, finding that current approaches to proportionality from the field of social choice are, at their core, incompatible with standard rationality postulates in belief merging. We provide characterization results that explain the underlying conflict, and provide a complexity analysis of our novel operators.

#14 Structural Decompositions of Epistemic Logic Programs

Authors: Markus Hecher ; Michael Morak ; Stefan Woltran

Epistemic logic programs (ELPs) are a popular generalization of standard Answer Set Programming (ASP) that provides means for reasoning over answer sets within the language. This richer formalism comes at the price of higher computational complexity, reaching up to the fourth level of the polynomial hierarchy. However, in contrast to standard ASP, dedicated investigations towards tractability have not yet been undertaken. In this paper, we give first results in this direction and show that central ELP problems can be solved in linear time for ELPs exhibiting structural properties in terms of bounded treewidth. We also provide a full dynamic programming algorithm that adheres to these bounds. Finally, we show that applying treewidth to a novel dependency structure—given in terms of epistemic literals—allows us to bound the number of ASP solver calls in typical ELP solving procedures.

#15 Going Deep: Graph Convolutional Ladder-Shape Networks

Authors: Ruiqi Hu ; Shirui Pan ; Guodong Long ; Qinghua Lu ; Liming Zhu ; Jing Jiang

Neighborhood aggregation algorithms such as spectral graph convolutional networks (GCNs) formulate graph convolutions as a symmetric Laplacian smoothing operation that aggregates the feature information of each node with that of its neighbors. While they have achieved great success in semi-supervised node classification on graphs, current approaches suffer from the over-smoothing problem as the depth of the network increases, which typically leads to a noticeable degradation in performance. To solve this problem, we present graph convolutional ladder-shape networks (GCLN), a novel graph neural network architecture that transmits messages from shallow layers to deeper layers, overcoming the over-smoothing problem and dramatically extending the scale of the network with improved performance. We validate the effectiveness of GCLN at the node level with a semi-supervised task (node classification) and an unsupervised task (node clustering), and at the graph level with graph classification by applying a differentiable pooling operation. On six real-world benchmark data sets, GCLN outperforms original GCNs, deep GCNs and other state-of-the-art GCN-based models, which were designed from various perspectives, on all three tasks.
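
The abstract does not spell out the architecture, but one plausible reading of "transmits messages from shallow layers to deeper layers" is a ladder of skip connections between a shallow stack and a deep stack of graph convolutions. The PyTorch sketch below is exactly that hedged reading; the layer sizes, depth and simple additive skips are assumptions, not the paper's GCLN.

```python
# Heavily hedged sketch of one reading of the idea; the actual GCLN
# architecture is specified in the paper, not here.
import torch
import torch.nn as nn

def normalized_adj(adj):
    """Symmetric smoothing operator D^-1/2 (A + I) D^-1/2."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)

class LadderGCN(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim, depth=6):
        super().__init__()
        self.down = nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else hid_dim, hid_dim) for i in range(depth)])
        self.up = nn.ModuleList([nn.Linear(hid_dim, hid_dim) for _ in range(depth)])
        self.out = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj_hat):
        skips, h = [], x
        for layer in self.down:                              # shallow stack
            h = torch.relu(adj_hat @ layer(h))
            skips.append(h)
        for layer, skip in zip(self.up, reversed(skips)):    # deep stack
            # the ladder connection re-injects less-smoothed shallow features
            h = torch.relu(adj_hat @ layer(h)) + skip
        return self.out(h)

adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()                          # toy symmetric graph
model = LadderGCN(in_dim=8, hid_dim=16, out_dim=3)
print(model(torch.rand(5, 8), normalized_adj(adj)).shape)    # torch.Size([5, 3])
```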

#16 Aggregation of Perspectives Using the Constellations Approach to Probabilistic Argumentation

Authors: Anthony Hunter ; Kawsar Noor

In the constellations approach to probabilistic argumentation, there is a probability distribution over the subgraphs of an argument graph, which can be used to represent the uncertainty in the structure of the argument graph. In this paper, we consider how to construct this probability distribution from data. We provide a language for data based on perspectives (opinions) on the structure of the graph, and we introduce a framework (based on general properties and some specific proposals) for aggregating these perspectives so as to obtain a probability distribution that best reflects them. This can be used in applications such as summarizing collections of online reviews and combining conflicting reports.
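
As a concrete, hedged example of the kind of input and output involved (a naive aggregation, not one of the paper's proposed operators): if each perspective simply lists the attacks it believes are present, attack probabilities can be taken as endorsement frequencies and, under an independence assumption, lifted to a distribution over subgraphs.

```python
# Naive aggregation for illustration only; the paper studies principled
# aggregation frameworks and their properties.
from itertools import chain, combinations

ALL_ATTACKS = [("a", "b"), ("b", "c")]

# Each perspective (opinion) lists the attacks it believes are present.
perspectives = [{("a", "b"), ("b", "c")}, {("a", "b")}, {("b", "c")}]

# Attack probability = fraction of endorsing perspectives (an assumption).
p_attack = {att: sum(att in p for p in perspectives) / len(perspectives)
            for att in ALL_ATTACKS}

def subgraphs(attacks):
    return chain.from_iterable(
        combinations(attacks, r) for r in range(len(attacks) + 1))

# The independence assumption lifts per-attack probabilities to subgraphs.
for sub in subgraphs(ALL_ATTACKS):
    prob = 1.0
    for att in ALL_ATTACKS:
        prob *= p_attack[att] if att in sub else 1 - p_attack[att]
    print(sorted(sub), round(prob, 3))
```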

#17 Least General Generalizations in Description Logic: Verification and Existence

Authors: Jean Christoph Jung ; Carsten Lutz ; Frank Wolter

We study two forms of least general generalizations in description logic, the least common subsumer (LCS) and most specific concept (MSC). While the LCS generalizes from examples that take the form of concepts, the MSC generalizes from individuals in data. Our focus is on the complexity of existence and verification, the latter meaning to decide whether a candidate concept is the LCS or MSC. We consider cases with and without a background TBox and a target signature. Our results range from coNP-complete for LCS and MSC verification in the description logic ℰℒ without TBoxes to undecidability of LCS and MSC verification and existence in ℰℒℐ with TBoxes. To obtain results in the presence of a TBox, we establish a close link between the problems studied in this paper and concept learning from positive and negative examples. We also give a way to regain decidability in ℰℒℐ with TBoxes and study single example MSC as a special case.

#18 Complexity and Expressive Power of Disjunction and Negation in Limit Datalog

Authors: Mark Kaminski ; Bernardo Cuenca Grau ; Egor V. Kostylev ; Ian Horrocks

Limit Datalog is a fragment of Datalogℤ—the extension of Datalog with arithmetic functions over the integers—which has been proposed as a declarative language suitable for capturing data analysis tasks. In limit Datalog programs, all intensional predicates with a numeric argument are limit predicates that keep maximal (or minimal) bounds on numeric values. Furthermore, to ensure decidability of reasoning, limit Datalog imposes a linearity condition restricting the use of multiplication in rules. In this paper, we study the complexity and expressive power of limit Datalog programs extended with disjunction in the heads of rules and non-monotonic negation under the stable model semantics. We show that allowing for unrestricted use of negation leads to undecidability of reasoning. Decidability can be restored by stratifying the use of negation over predicates carrying numeric values. We show that the resulting language is Π2EXP-complete in combined complexity and that it captures Π2P over ordered structures in the sense of descriptive complexity. We also provide a study of several fragments of this language: we show that the complexity and expressive power of the full language are already reached for disjunction-free programs; furthermore, we show that semi-positive disjunctive programs are coNEXP-complete and that they capture coNP.

#19 Logics for Sizes with Union or Intersection

Authors: Caleb Kisby ; Saul Blanco ; Alex Kruckman ; Lawrence Moss

This paper presents the most basic logics for reasoning about the sizes of sets that admit either the union of terms or the intersection of terms. That is, our logics handle assertions All x y and AtLeast x y, where x and y are built up from basic terms by either unions or intersections. We present a sound, complete, and polynomial-time decidable proof system for these logics. An immediate consequence of our work is the completeness of the logic additionally permitting More x y. The logics considered here may be viewed as efficient fragments of two logics which appear in the literature: Boolean Algebra with Presburger Arithmetic and the Logic of Comparative Cardinality.
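
For readers unfamiliar with the assertion forms, the intended semantics is easy to state over finite sets: All x y holds when x is a subset of y, and AtLeast x y holds when x has at least as many elements as y. The toy model checker below (union terms only, an illustration rather than the paper's proof system) evaluates such assertions in a concrete interpretation.

```python
# Toy model checker for the assertions (union terms only); the paper's
# contribution is the proof system, not this evaluation.
from functools import reduce

model = {"x": {1, 2}, "y": {2, 3}, "z": {1, 2, 3}}

def union(*names):
    return reduce(set.union, (model[n] for n in names), set())

def All(s, t):
    return s <= t               # every member of s is a member of t

def AtLeast(s, t):
    return len(s) >= len(t)     # s has at least as many elements as t

print(All(union("x", "y"), union("z")))       # {1,2,3} is a subset of {1,2,3}: True
print(AtLeast(union("x"), union("y", "z")))   # 2 >= 3: False
```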

#20 FastLAS: Scalable Inductive Logic Programming Incorporating Domain-Specific Optimisation Criteria

Authors: Mark Law ; Alessandra Russo ; Elisa Bertino ; Krysia Broda ; Jorge Lobo

Inductive Logic Programming (ILP) systems aim to find a set of logical rules, called a hypothesis, that explains a set of examples. In cases where many such hypotheses exist, ILP systems are often biased towards shorter solutions, leading to highly general rules being learned. In some application domains, such as security and access control policies, this bias may not be desirable: when data is sparse, more specific rules that guarantee tighter security should be preferred. This paper presents a new general notion of a scoring function over hypotheses that allows a user to express domain-specific optimisation criteria. This is incorporated into a new ILP system, called FastLAS, that takes as input a learning task and a customised scoring function, and computes an optimal solution with respect to the given scoring function. We evaluate the accuracy of FastLAS over real-world datasets for access control policies and show that varying the scoring function allows a user to target domain-specific performance metrics. We also compare FastLAS to state-of-the-art ILP systems, using the standard ILP bias for shorter solutions, and demonstrate that FastLAS is significantly faster and more scalable.
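
A toy example of what a domain-specific scoring function can change (hypothetical rules and requests, not FastLAS's actual interface): under the standard shortest-hypothesis bias the most general rule wins, whereas a security-oriented score that penalises the number of permitted requests prefers the tightest rule.

```python
# Hypothetical rules and requests for illustration; this is not FastLAS's
# input language or scoring interface.
requests = [("alice", "db"), ("alice", "web"), ("bob", "db"), ("bob", "web")]

hypotheses = {
    "allow everything":       (1, lambda u, r: True),
    "allow alice only":       (2, lambda u, r: u == "alice"),
    "allow alice on db only": (3, lambda u, r: u == "alice" and r == "db"),
}

def standard_score(length, rule):
    return length                          # usual ILP bias: shorter is better

def security_score(length, rule):
    permitted = sum(rule(u, r) for u, r in requests)
    return permitted + 0.1 * length        # tighter (more specific) rules score better

for name, (length, rule) in hypotheses.items():
    print(f"{name:24s} standard={standard_score(length, rule)} "
          f"security={security_score(length, rule):.1f}")
```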

#21 Automatic Verification of Liveness Properties in the Situation Calculus

Authors: Jian Li ; Yongmei Liu

In dynamic systems, liveness properties concern whether something good will eventually happen. Examples of liveness properties are termination of programs and goal achievability. In this paper, we consider the following theorem-proving problem: given an action theory and a goal, check whether the goal is achievable in every model of the action theory. We make the assumption that there are finitely many non-number objects. We propose to use mathematical induction to address this problem: we identify a natural-number feature and prove by mathematical induction that the goal is achievable for any value of the feature. Both the base case and the induction step are verified using first-order theorem provers. We propose a simple method to identify potential features, each being the number of objects satisfying a certain formula, by generating small models of the action theory and calling a classical planner to achieve the goal. We also propose to regress the goal through different actions and then verify whether the resulting goals are achievable. We implemented the proposed method and experimented with the blocks world domain and a number of other domains from the literature. Experimental results show that most goals can be verified within a reasonable amount of time.

#22 Path Ranking with Attention to Type Hierarchies

Authors: Weiyu Liu ; Angel Daruna ; Zsolt Kira ; Sonia Chernova

The objective of the knowledge base completion problem is to infer missing information from existing facts in a knowledge base. Prior work has demonstrated the effectiveness of path-ranking based methods, which solve the problem by discovering observable patterns in knowledge graphs, consisting of nodes representing entities and edges representing relations. However, these patterns either lack accuracy because they rely solely on relations or cannot easily generalize due to the direct use of specific entity information. We introduce Attentive Path Ranking, a novel path pattern representation that leverages type hierarchies of entities to both avoid ambiguity and maintain generalization. Then, we present an end-to-end trained attention-based RNN model to discover the new path patterns from data. Experiments conducted on benchmark knowledge base completion datasets WN18RR and FB15k-237 demonstrate that the proposed model outperforms existing methods on the fact prediction task by statistically significant margins of 26% and 10%, respectively. Furthermore, quantitative and qualitative analyses show that the path patterns balance between generalization and discrimination.
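
The representation idea can be illustrated with a toy type hierarchy (all names below are hypothetical): entities along a relation path are replaced by types at some abstraction level, giving a pattern that is less ambiguous than relations alone yet more general than the concrete entities. In the paper the attention mechanism learns which level to use; here a level is fixed by hand.

```python
# Hypothetical entities, relations and type hierarchy, for illustration only.
type_hierarchy = {                 # most specific type first
    "ann":    ["scientist", "person", "agent"],
    "acme":   ["company", "organisation", "agent"],
    "boston": ["city", "location", "entity"],
}

path = [("ann", "works_for", "acme"), ("acme", "located_in", "boston")]

def path_pattern(path, level):
    """Abstract a concrete path by replacing entities with types at `level`."""
    return [(type_hierarchy[h][level], rel, type_hierarchy[t][level])
            for h, rel, t in path]

print(path_pattern(path, level=0))   # specific: scientist, company, city
print(path_pattern(path, level=1))   # general: person, organisation, location
```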

#23 K-BERT: Enabling Language Representation with Knowledge Graph

Authors: Weijie Liu ; Peng Zhou ; Zhe Zhao ; Zhiruo Wang ; Qi Ju ; Haotang Deng ; Ping Wang

Pre-trained language representation models such as BERT capture a general language representation from large-scale corpora, but lack domain-specific knowledge. When reading a domain text, experts make inferences with relevant knowledge. For machines to achieve this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into the sentences as domain knowledge. However, incorporating too much knowledge may divert a sentence from its correct meaning, which we call the knowledge noise (KN) issue. To overcome KN, K-BERT introduces soft positions and a visible matrix to limit the impact of the injected knowledge. Because K-BERT is capable of loading model parameters from pre-trained BERT, it can easily inject domain knowledge simply by being equipped with a KG, without any pre-training of its own. Our investigation reveals promising results on twelve NLP tasks. Especially in domain-specific tasks (including finance, law, and medicine), K-BERT significantly outperforms BERT, which demonstrates that K-BERT is an excellent choice for knowledge-driven problems that require expert knowledge.
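
A rough, hedged sketch of the two mechanisms named above (toy tokens and a hypothetical injected triple; the released K-BERT code differs in detail): tokens of an injected triple receive soft positions that continue from their anchor entity, and a visible matrix restricts attention so that the rest of the sentence cannot see the injected branch.

```python
# Rough sketch of soft positions and the visible matrix; toy tokens and a
# hypothetical injected triple, not the released K-BERT implementation.
import numpy as np

sentence = ["tim", "cook", "visited", "beijing"]
branch = ["ceo_of", "apple"]            # triple injected after the entity "cook"

tokens = sentence[:2] + branch + sentence[2:]
# Soft positions: the branch continues from its anchor entity ("cook", pos 1);
# the remaining sentence tokens keep their original sentence positions.
soft_positions = [0, 1, 2, 3, 2, 3]

n = len(tokens)
visible = np.zeros((n, n), dtype=bool)
sentence_idx = [0, 1, 4, 5]             # original sentence tokens
branch_idx = [1, 2, 3]                  # anchor entity plus injected branch
for group in (sentence_idx, branch_idx):
    for i in group:
        for j in group:
            visible[i, j] = True        # attention allowed only within a group

print(tokens)
print(soft_positions)
print(visible.astype(int))
```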

#24 Explanations for Inconsistency-Tolerant Query Answering under Existential Rules

Authors: Thomas Lukasiewicz ; Enrico Malizia ; Cristian Molinaro

Querying inconsistent knowledge bases is a problem that has attracted a great deal of interest over the last decades. While several semantics of query answering have been proposed, and their complexity is rather well-understood, little attention has been paid to the problem of explaining query answers. Explainability has recently become a prominent problem in different areas of AI. In particular, explaining query answers allows users to understand not only what is entailed by an inconsistent knowledge base, but also why. In this paper, we address the problem of explaining query answers for existential rules under three popular inconsistency-tolerant semantics, namely, the ABox repair, the intersection of repairs, and the intersection of closed repairs semantics. We provide a thorough complexity analysis for a wide range of existential rule languages and for different complexity measures.

#25 Resilient Logic Programs: Answer Set Programs Challenged by Ontologies

Authors: Sanja Lukumbuzya ; Magdalena Ortiz ; Mantas Šimkus

We introduce resilient logic programs (RLPs) that couple a non-monotonic logic program and a first-order (FO) theory or description logic (DL) ontology. Unlike previous hybrid languages, where the interaction between the program and the theory is limited to consistency or query entailment tests, in RLPs answer sets must be ‘resilient’ to the models of the theory, allowing non-output predicates of the program to respond differently to different models. RLPs can elegantly express ∃∀∃-QBFs, disjunctive ASP, and configuration problems under incompleteness of information. RLPs are decidable when a couple of natural assumptions are made: (i) satisfiability of FO theories in the presence of closed predicates is decidable, and (ii) rules are safe in the style of the well-known DL-safeness. We further show that a large fragment of such RLPs can be translated into standard (disjunctive) ASP, for which efficient implementations exist. For RLPs with theories expressed in DLs, we use a novel relaxation of safeness that safeguards rules via predicates whose extensions can be inferred to have a finite bound. We present several complexity results for the case where ontologies are written in some standard DLs.